695 research outputs found
How stable are transport model results to changes of resonance parameters? A UrQMD model study
The Ultrarelativistic Quantum Molecular Dynamics (UrQMD) model is widely used
to simulate heavy ion collisions in broad energy ranges. It consists of various
components to implement the different physical processes underlying the
transport approach. A major building block is the set of shared tables of
constants implementing the baryon masses and widths. Unfortunately, many of these input
parameters are not well known experimentally. In view of the upcoming physics
program at FAIR, it is therefore of fundamental interest to explore the
stability of the model results when these parameters are varied. We perform a
systematic variation of particle masses and widths within the limits proposed
by the Particle Data Group (or up to 10%). We find that the model results
depend only weakly on the variation of these input parameters. Thus, we
conclude that the present implementation is stable with respect to the
modification of not yet well specified particle parameters.
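The stability test described in the abstract can be illustrated with a toy sketch. This is not UrQMD code: the observable function, the quadratic response, and the variation range are all hypothetical stand-ins for a full transport-model run, meant only to show the shape of such a parameter scan.

```python
import numpy as np

# Illustrative sketch (not UrQMD): vary a resonance parameter within an
# uncertainty band and check the stability of a model observable.
# `pion_yield` is a hypothetical stand-in for a full transport-model run,
# with an assumed weak (quadratic) dependence on a Delta(1232) mass shift.

def pion_yield(mass_shift_gev):
    """Toy observable; 0.120 GeV is the approximate Delta(1232) width."""
    return 1.0 + 0.02 * (mass_shift_gev / 0.120) ** 2

nominal = pion_yield(0.0)
# Scan the mass shift over +/-10% of the width, a stand-in for PDG limits.
shifts = np.linspace(-0.012, 0.012, 9)
deviations = [abs(pion_yield(s) - nominal) / nominal for s in shifts]

print(max(deviations))  # stays well below 1% in this toy: "stable"
```

A real study would replace `pion_yield` with full event generation and repeat the scan for every mass and width under variation.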
Lattice QCD based on OpenCL
We present an OpenCL-based Lattice QCD application using a heatbath algorithm
for the pure gauge case and Wilson fermions in the twisted mass formulation.
The implementation is platform independent and can be used on AMD or NVIDIA
GPUs, as well as on classical CPUs. On the AMD Radeon HD 5870 our double
precision dslash implementation performs at 60 GFLOPS over a wide range of
lattice sizes. The hybrid Monte Carlo implementation presented reaches a
speedup of four over the reference code running on a server CPU. Comment: 19 pages, 11 figures
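The heatbath algorithm mentioned in the abstract updates each gauge link by sampling it exactly from its conditional distribution. The paper's case is SU(3) in OpenCL; as an illustrative stand-in, the sketch below does a heatbath sweep for 2D compact U(1), where the conditional distribution of a link angle is von Mises and can be sampled directly. All names and parameters here are assumptions for the toy model, not the paper's code.

```python
import numpy as np

# Heatbath sketch for 2D compact U(1) lattice gauge theory, a toy stand-in
# for the SU(3)/Wilson setup of the paper. The local action of one link is
# beta*Re[e^{i theta} V] with V the staple sum, so theta follows a von Mises
# distribution and can be drawn exactly (no accept/reject step).

rng = np.random.default_rng(0)
L, beta = 6, 4.0
theta = np.zeros((2, L, L))  # link angles theta[mu, x, y], cold start

def staple_phase(mu, x, y):
    """Sum of the two staples attached to link (mu, x, y), as a complex number."""
    nu = 1 - mu
    s = np.array([x, y])
    e_mu = np.array([mu == 0, mu == 1], dtype=int)
    e_nu = np.array([nu == 0, nu == 1], dtype=int)
    sp_mu = (s + e_mu) % L          # site x + mu
    sp_nu = (s + e_nu) % L          # site x + nu
    sm_nu = (s - e_nu) % L          # site x - nu
    sd = (s + e_mu - e_nu) % L      # site x + mu - nu
    fwd = (theta[nu, sp_mu[0], sp_mu[1]]
           - theta[mu, sp_nu[0], sp_nu[1]] - theta[nu, x, y])
    bwd = (-theta[nu, sd[0], sd[1]]
           - theta[mu, sm_nu[0], sm_nu[1]] + theta[nu, sm_nu[0], sm_nu[1]])
    return np.exp(1j * fwd) + np.exp(1j * bwd)

def sweep():
    for mu in range(2):
        for x in range(L):
            for y in range(L):
                V = staple_phase(mu, x, y)
                # p(theta) ~ exp(beta*|V| * cos(theta + arg V))
                theta[mu, x, y] = rng.vonmises(-np.angle(V), beta * np.abs(V))

def avg_plaquette():
    p = (theta[0] + np.roll(theta[1], -1, axis=0)
         - np.roll(theta[0], -1, axis=1) - theta[1])
    return np.cos(p).mean()

for _ in range(20):
    sweep()
# In 2D U(1) the average plaquette should sit near I1(beta)/I0(beta),
# about 0.86 at beta = 4.
print(avg_plaquette())
```

The GPU versions discussed in the paper parallelize such sweeps with a checkerboard decomposition so that non-neighboring links update concurrently; the sequential loop above keeps the toy minimal.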
Fast TPC Online Tracking on GPUs and Asynchronous Data Processing in the ALICE HLT to facilitate Online Calibration
ALICE (A Large Ion Collider Experiment) is one of the four major experiments at
the Large Hadron Collider (LHC) at CERN, which is today the most powerful
particle accelerator worldwide. The High Level Trigger (HLT) is an online
compute farm of about 200 nodes, which reconstructs events measured by the
ALICE detector in real-time. The HLT uses a custom online data-transport
framework to distribute data and workload among the compute nodes. ALICE
employs several calibration-sensitive subdetectors, e.g. the TPC (Time
Projection Chamber). For a precise reconstruction, the HLT has to perform the
calibration online. Online calibration can render certain offline calibration
steps obsolete and thus speed up offline analysis. Looking forward to ALICE
Run 3, starting in 2020, online calibration becomes a necessity. The main
detector used for track reconstruction is the TPC. Reconstructing the
trajectories in the TPC is the most compute-intensive step during event
reconstruction. Therefore, a fast tracking implementation is of great
importance. Reconstructed TPC tracks form the basis for the calibration,
making fast online tracking mandatory. We present several components developed for
the ALICE High Level Trigger to perform fast event reconstruction and to
provide features required for online calibration. As a first topic, we present
our TPC tracker, which employs GPUs to speed up the processing and which is
based on a Cellular Automaton and on the Kalman filter. Our TPC tracking algorithm
has been successfully used in 2011 and 2012 in the lead-lead and the
proton-lead runs. We have improved it to leverage features of newer GPUs and we
have ported it to support OpenCL, CUDA, and CPUs with a single common source
code, making the tracker vendor independent. As a second topic, we present framework
extensions required for online calibration. ... Comment: 8 pages, 6 figures, contribution to the CHEP 2015 conference.
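The Kalman-filter part of the track fit mentioned in the abstract can be sketched in a minimal form. This is not the ALICE tracker: the straight-line track model, the two-component state (position, slope), and all numbers below are hypothetical simplifications of the real five-parameter helix fit.

```python
import numpy as np

# Minimal Kalman-filter track-fit sketch (not the ALICE code): a straight
# track state (y, slope) is propagated layer by layer along x and updated
# with each measured cluster position y.

rng = np.random.default_rng(1)
true_y0, true_slope, sigma = 1.0, 0.5, 0.05
layers = np.arange(1.0, 11.0)                        # detector layer x-positions
hits = true_y0 + true_slope * layers + rng.normal(0, sigma, layers.size)

x_prev = 0.0
state = np.array([0.0, 0.0])                         # initial guess (y, slope)
P = np.eye(2) * 100.0                                # large initial covariance
H = np.array([[1.0, 0.0]])                           # we measure y only
R = sigma ** 2                                       # measurement variance

for x, y_meas in zip(layers, hits):
    # Predict: propagate the state to the next layer (straight-line model).
    F = np.array([[1.0, x - x_prev], [0.0, 1.0]])
    state = F @ state
    P = F @ P @ F.T
    # Update: blend prediction and measurement via the Kalman gain.
    S = H @ P @ H.T + R
    K = P @ H.T / S
    state = state + (K * (y_meas - H @ state)).ravel()
    P = (np.eye(2) - K @ H) @ P
    x_prev = x

print(state)  # fitted (y at the last layer, slope), near (6.0, 0.5)
```

In the real tracker the Cellular Automaton first builds track seeds from cluster triplets; the Kalman filter then refines each candidate exactly in this predict/update rhythm, which is what maps so well onto GPU threads, one track per thread.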
BioEM: GPU-accelerated computing of Bayesian inference of electron microscopy images
In cryo-electron microscopy (EM), molecular structures are determined from
large numbers of projection images of individual particles. To harness the full
power of this single-molecule information, we use the Bayesian inference of EM
(BioEM) formalism. By ranking structural models using posterior probabilities
calculated for individual images, BioEM in principle addresses the challenge of
working with highly dynamic or heterogeneous systems not easily handled in
traditional EM reconstruction. However, the calculation of these posteriors for
large numbers of particles and models is computationally demanding. Here we
present highly parallelized, GPU-accelerated computer software that performs
this task efficiently. Our flexible formulation employs CUDA, OpenMP, and MPI
parallelization combined with both CPU and GPU computing. The resulting BioEM
software scales nearly ideally both on pure CPU and on CPU+GPU architectures,
thus enabling Bayesian analysis of tens of thousands of images in a reasonable
time. The general mathematical framework and robust algorithms are not limited
to cryo-electron microscopy but can be generalized for electron tomography and
other imaging experiments.
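The core computation the abstract describes, ranking models by posterior probabilities accumulated over many particle images, can be shown with a toy. This is not the BioEM code: the Gaussian pixel likelihood, the tiny orientation grid, and all sizes below are illustrative assumptions standing in for BioEM's full marginalization over orientation, shift, normalization, and defocus.

```python
import numpy as np

# Toy sketch of the BioEM idea (not the BioEM software): rank structural
# models by a log-posterior that sums, over all particle images, the
# log-marginal likelihood over nuisance parameters (here: orientations only).

rng = np.random.default_rng(2)
n_images, n_orient, npix, sigma = 200, 8, 16, 0.5

# Hypothetical "projections": one 1D template per (model, orientation).
models = [rng.normal(size=(n_orient, npix)) for _ in range(3)]
# Synthetic images generated from model 0 at random orientations, plus noise.
images = np.stack([models[0][rng.integers(n_orient)]
                   + rng.normal(0, sigma, npix) for _ in range(n_images)])

def log_posterior(model):
    """log p(images | model), uniform orientation prior, up to a constant."""
    d2 = ((images[:, None, :] - model[None, :, :]) ** 2).sum(axis=2)
    log_lik = -d2 / (2 * sigma ** 2)            # Gaussian pixel noise
    m = log_lik.max(axis=1, keepdims=True)      # stable log-sum-exp
    return (m.ravel() + np.log(np.exp(log_lik - m).mean(axis=1))).sum()

scores = [log_posterior(m) for m in models]
print(int(np.argmax(scores)))  # the generating model (0) ranks highest
```

The independent (image, orientation) likelihood evaluations in `d2` are exactly the part that the paper offloads to GPUs with CUDA while MPI and OpenMP distribute images across nodes and cores.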
Online Calibration of the TPC Drift Time in the ALICE High Level Trigger
ALICE (A Large Ion Collider Experiment) is one of four major experiments at
the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is a
compute cluster, which reconstructs collisions as recorded by the ALICE
detector in real-time. It employs a custom online data-transport framework to
distribute data and workload among the compute nodes.
ALICE employs subdetectors sensitive to environmental conditions such as
pressure and temperature, e.g. the Time Projection Chamber (TPC). A precise
reconstruction of particle trajectories requires the calibration of these
detectors. Performing the calibration in real time in the HLT improves the
online reconstruction and renders certain offline calibration steps obsolete,
speeding up offline physics analysis. For LHC Run 3, starting in 2020, when data
reduction will rely on reconstructed data, online calibration becomes a
necessity. Reconstructed particle trajectories form the basis for the
calibration, making fast online tracking mandatory. The main detectors used
for this purpose are the TPC and ITS (Inner Tracking System). Reconstructing
the trajectories in the TPC is the most compute-intensive step.
We present several improvements to the ALICE High Level Trigger developed to
facilitate online calibration. The main new development for online calibration
is a wrapper that can run ALICE offline analysis and calibration tasks inside
the HLT. On top of that, we have added asynchronous processing capabilities to
support long-running calibration tasks in the HLT framework, which runs
event-synchronously otherwise. In order to improve the resiliency, an isolated
process performs the asynchronous operations such that even a fatal error does
not disturb data taking. We have complemented the original loop-free HLT chain
with ZeroMQ data-transfer components. [...] Comment: 8 pages, 10 figures, proceedings of the 2016 IEEE-NPSS Real Time Conference.
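The resiliency pattern described above, running long calibration work in an isolated process so that even a fatal error cannot disturb data taking, can be sketched with standard process isolation. This is not the HLT framework: the task, the pipe protocol, and the result dictionary are hypothetical illustrations of the design.

```python
import multiprocessing as mp
import time

# Sketch of the isolation idea (not the HLT framework): a long-running
# calibration task executes in a child process; if it dies, the parent's
# event-synchronous loop simply carries on without a calibration update.

def calibration_task(conn, crash):
    if crash:
        raise RuntimeError("simulated fatal calibration error")
    time.sleep(0.1)                              # stand-in for long-running work
    conn.send({"drift_time_correction": 1.02})   # hypothetical result object

def run_isolated(crash):
    parent, child = mp.Pipe()
    p = mp.Process(target=calibration_task, args=(child, crash))
    p.start()
    p.join(timeout=5)
    if p.exitcode != 0:        # child crashed; data taking is undisturbed
        return None
    return parent.recv()

if __name__ == "__main__":
    print(run_isolated(crash=True))    # None: the failure is contained
    print(run_isolated(crash=False))   # the calibration result arrives
```

The asynchronous part of the real framework adds queuing on top of this: calibration requests are handed to the isolated worker while event-synchronous reconstruction continues, and results are folded back in when ready.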
Autonomic Management of Large Clusters and Their Integration into the Grid
We present a framework for the coordinated, autonomic management of multiple clusters in a compute center and their integration into a Grid environment. Site autonomy and the automation of administrative tasks are prime aspects of this framework. The system behavior is continuously monitored in a steering cycle, and appropriate actions are taken to resolve any problems. All presented components have been implemented in the course of the EU project DataGrid: the Lemon monitoring components, the FT fault-tolerance mechanism, the quattor system for software installation and configuration, the RMS job and resource management system, and the Gridification scheme that integrates clusters into the Grid.
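The steering cycle named in the abstract, monitor, detect, act, can be sketched as a single pass of such a loop. The metric names, limits, and recovery handlers below are hypothetical; they do not come from the Lemon or FT components.

```python
# Sketch of one steering-cycle pass (hypothetical names, not the DataGrid
# components): compare monitored metrics against limits and trigger
# automated recovery actions for any violations.

def steering_cycle(metrics, limits, actions):
    """One monitor -> detect -> act pass; returns the actions taken."""
    taken = []
    for name, value in metrics.items():
        if name in limits and value > limits[name]:
            taken.append(actions[name](name, value))
    return taken

# Hypothetical sensor readings and recovery handlers.
metrics = {"disk_usage_pct": 96, "load_avg": 1.3, "failed_jobs": 12}
limits = {"disk_usage_pct": 90, "failed_jobs": 10}
actions = {
    "disk_usage_pct": lambda n, v: f"cleanup: purge scratch ({n}={v})",
    "failed_jobs": lambda n, v: f"drain: take node offline ({n}={v})",
}

for action in steering_cycle(metrics, limits, actions):
    print(action)
```

In the framework this cycle runs continuously per site, which is what preserves site autonomy: each cluster heals itself locally before problems become visible at the Grid level.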
Benchmarks and implementation of the ALICE high level trigger
The ALICE high level trigger combines and processes the full information from all major detectors in a large computer cluster. Data rate reduction is achieved by reducing the event rate through the selection of interesting events (software trigger) and by reducing the event size through sub-event selection and advanced data compression. Reconstruction chains for the barrel detectors and the forward muon spectrometer have been benchmarked. The HLT receives a replica of the raw data via the standard ALICE DDL link into a custom PCI receiver card (HLT-RORC). These boards also provide an FPGA co-processor for data-intensive tasks of pattern recognition. Some of the pattern recognition algorithms (cluster finder, Hough transformation) have been re-designed in VHDL to be executed in the Virtex-4 FPGA on the HLT-RORC. HLT prototypes were operated during the beam tests of the TPC and TRD detectors. The input and output interfaces to DAQ and the data flow inside of the HLT were successfully tested. A full-scale prototype of the dimuon-HLT achieved the expected data flow performance. This system was finally embedded in a Grid-like system of several distributed clusters, demonstrating the scalability and fault-tolerance of the HLT.
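The two data-reduction handles named in the abstract compose multiplicatively: event selection cuts the event rate, while sub-event selection and compression cut the event size. A back-of-the-envelope sketch with hypothetical numbers (none of these figures are from the paper):

```python
# Hypothetical numbers illustrating how the two reduction handles compose;
# the real ALICE rates and sizes are not quoted in the abstract above.

input_rate_hz = 1000       # assumed raw event rate
event_size_mb = 75.0       # assumed raw event size
trigger_accept = 0.10      # software trigger keeps 10% of events
size_factor = 0.20         # size after sub-event selection + compression

output_mb_per_s = input_rate_hz * trigger_accept * event_size_mb * size_factor
print(output_mb_per_s)  # 1500.0 MB/s, down from 75000.0 MB/s raw
```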